Scene text editing (STE) aims to replace the text in an image with new desired content while preserving the background and the style of the original text. However, due to complicated background textures and diverse text styles, existing methods fall short of generating clear and legible edited text images. In this study, we attribute the poor editing performance to two problems: 1) Implicit decoupling structure. Previous methods that edit the whole image must learn the different translation rules of background and text regions simultaneously. 2) Domain gap. Due to the lack of edited real scene text images, the network can only be well trained on synthetic pairs and consequently performs poorly on real-world images. To handle the above problems, we propose a novel network that MOdifies Scene Text images at strokE Level (MOSTEL). First, we generate stroke guidance maps to explicitly indicate the regions to be edited. In contrast to the implicit approach of directly modifying all pixels at the image level, such explicit instructions filter out distractions from the background and guide the network to focus on the editing rules of text regions. Second, we propose Semi-supervised Hybrid Learning to train the network with both labeled synthetic images and unpaired real scene text images, so that the STE model adapts to real-world data distributions. Moreover, two new datasets (Tamper-Syn2k and Tamper-Scene) are proposed to fill the lack of public evaluation datasets. Extensive experiments demonstrate that our MOSTEL outperforms previous methods both qualitatively and quantitatively. Datasets and code will be available at https://github.com/qqqyd/MOSTEL.
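The stroke-level idea above can be illustrated with a toy loss: a binary stroke guidance map splits supervision between background reconstruction and text-region editing, so the network is not forced to learn one implicit rule for both. This is a minimal NumPy sketch under our own assumptions (a weighted MSE and the name `masked_edit_loss`); it is not MOSTEL's actual objective.

```python
import numpy as np

def masked_edit_loss(pred, target, stroke_mask, text_weight=5.0):
    """Toy composite loss: a stroke guidance map (1 = text stroke,
    0 = background) supervises background pixels with plain reconstruction
    while text-region pixels receive a higher editing weight."""
    per_pixel = (pred - target) ** 2
    bg_loss = (per_pixel * (1.0 - stroke_mask)).mean()
    text_loss = (per_pixel * stroke_mask).mean()
    return bg_loss + text_weight * text_loss

# Toy example: a 4x4 "image" with a 2x2 stroke region.
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
print(masked_edit_loss(np.zeros((4, 4)), np.ones((4, 4)), mask))  # -> 2.0
```

With every pixel off by 1, the background term contributes 12/16 = 0.75 and the weighted text term 5 * 4/16 = 1.25, so the explicit mask controls how much each region drives the gradient.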
Scene text spotting is of great importance to the computer vision community due to its wide variety of applications. Recent methods attempt to introduce linguistic knowledge for challenging recognition rather than relying on pure visual classification. However, how to effectively model linguistic rules in end-to-end deep networks remains a research challenge. In this paper, we argue that the limited capacity of current language models stems from 1) implicit language modeling; 2) unidirectional feature representation; and 3) a language model with noisy input. Correspondingly, we propose an autonomous, bidirectional and iterative ABINet++ for scene text spotting. First, the autonomous design enforces explicit language modeling by decoupling the recognizer into a vision model and a language model and blocking gradient flow between the two. Second, a novel bidirectional cloze network (BCN) is proposed as the language model, based on bidirectional feature representation. Third, we propose an iterative-correction execution manner for the language model, which effectively alleviates the impact of noisy input. Finally, to polish ABINet++ for long text recognition, we propose to aggregate horizontal features by embedding Transformer units inside a U-Net, and design a position and content attention module that integrates character order and content to attend to character features precisely. ABINet++ achieves state-of-the-art performance on both scene text recognition and scene text spotting benchmarks, which consistently demonstrates the superiority of our method in various environments, especially on low-quality images. Besides, extensive experiments in both English and Chinese also prove that a text spotter incorporating our language modeling method can significantly improve its performance in both accuracy and speed compared with commonly used attention-based recognizers.
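The iterative-correction execution manner can be caricatured as a fusion loop: the language model refines the current (noisy) prediction, the refinement is fused with the fixed vision prediction, and the fused result is fed back in. The toy language model and the log-space fusion below are our assumptions for illustration only; they are not the BCN architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def iterative_correction(vision_logits, language_model, n_iter=3):
    """Sketch of iterative correction: the fused prediction is repeatedly
    passed through the language model and re-fused with the (unchanged)
    vision prediction, progressively cleaning up the noisy input."""
    fused = softmax(vision_logits)
    for _ in range(n_iter):
        lm_probs = language_model(fused)  # refine from a noisy distribution
        fused = softmax(np.log(softmax(vision_logits)) + np.log(lm_probs))
    return fused

# Toy "language model": a fixed prior that pulls mass toward class 0.
prior = np.array([2.0, 0.0, 0.0])
toy_lm = lambda p: softmax(np.log(p + 1e-9) + prior)

vision = np.array([[0.0, 1.0, 0.0]])        # vision slightly favors class 1
out = iterative_correction(vision, toy_lm)  # class 0 dominates after 3 rounds
```

Each round sharpens the estimate: the linguistic prior corrects the visually preferred but linguistically implausible class.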
There still remains an extreme performance gap between Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs) when training from scratch on small datasets, which is attributed to the lack of inductive bias. In this paper, we further investigate this problem and point out two weaknesses of ViTs in inductive biases, namely spatial relevance and diverse channel representation. First, on the spatial aspect, objects are locally compact and relevant, so fine-grained features need to be extracted from a token and its neighbors; however, the lack of data hinders ViTs from attending to this spatial relevance. Second, on the channel aspect, representations exhibit diversity across different channels, but scarce data does not enable ViTs to learn representations strong enough for accurate recognition. To this end, we propose the Dynamic Hybrid Vision Transformer (DHVT) as a solution that enhances these two inductive biases. On the spatial aspect, we adopt a hybrid structure in which convolution is integrated into the patch embedding and multi-layer perceptron modules, forcing the model to capture token features along with their neighboring features. On the channel aspect, we introduce a dynamic feature aggregation module in the MLP and a brand-new "head token" design in the multi-head self-attention module to help re-calibrate channel representations and make different channel-group representations interact with each other. The fusion of weak channel representations forms a representation strong enough for classification. With this design, we successfully eliminate the performance gap between CNNs and ViTs, and our DHVT achieves a series of state-of-the-art results with lightweight models: 85.68% on CIFAR-100 with 22.8M parameters and 82.3% on ImageNet-1K with 24.0M parameters. Code is available at https://github.com/ArieSeirack/DHVT.
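The "head token" re-calibration can be sketched as: summarize each attention head's sequence into a head token, let the head tokens exchange information, and use the mixed summary to gate that head's channels, so weak channel-group representations are fused into a stronger one. The function name, the softmax mixing, and the tanh gate below are our illustration, not DHVT's actual module.

```python
import numpy as np

def head_token_recalibrate(head_feats):
    # head_feats: (heads, tokens, dim). Summarize each head into a "head
    # token", let head tokens interact via softmax-weighted mixing, then
    # gate each head's features with the mixed summary.
    head_tokens = head_feats.mean(axis=1)              # (heads, dim)
    aff = head_tokens @ head_tokens.T                  # head-to-head affinity
    w = np.exp(aff) / np.exp(aff).sum(-1, keepdims=True)
    mixed = w @ head_tokens                            # heads exchange info
    gate = 1.0 + np.tanh(mixed)                        # re-calibration gate
    return head_feats * gate[:, None, :]

x = np.random.default_rng(1).normal(size=(4, 8, 16))
y = head_token_recalibrate(x)
print(y.shape)  # (4, 8, 16)
```

The gate is centered at 1, so with no cross-head signal the features pass through unchanged; only informative head interactions re-scale the channel groups.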
Picking the first arrival times on prestack gathers, known as first-arrival time (FAT) picking, is an essential step in seismic data processing and has mostly been done manually. With the increasing density of seismic data acquisition, the efficiency of manual picking can no longer meet practical needs. Automatic picking methods have therefore been developed substantially in recent decades, especially deep-learning-based methods. However, current supervised deep-learning methods can rarely avoid a dependence on labeled samples. Moreover, since gather data is a set of signals quite different from natural images, current methods struggle with the FAT picking problem at low signal-to-noise ratio (SNR). In this paper, for hard-rock seismic gather data, we propose a Multi-Stage Segmentation Picking Network (MSSPN), which solves the generalization problem across work sites as well as the picking problem at low SNR. MSSPN consists of four sub-models that simulate the manual picking process, treating it as four stages from coarse to fine. Experiments on seven field datasets of varying quality show that our MSSPN outperforms the benchmarks by a large margin. In particular, our method achieves more than 90% accurate picking at medium and high SNRs, and even the fine-tuned model achieves 88% accurate picking on datasets with low SNR.
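MSSPN itself is a segmentation network, but the coarse-to-fine staging can be illustrated with a classical energy-ratio picker: a coarse pick on a decimated trace restricts the search window for a fine pick at full resolution. The energy-ratio attribute is a standard first-break heuristic and the two-stage wrapper is our simplification, not the paper's four-stage model.

```python
import numpy as np

def energy_ratio_pick(trace, win=10):
    # Classic energy-ratio first-break attribute: the ratio of trailing-
    # to leading-window energy peaks at the signal onset.
    e = trace.astype(float) ** 2
    c = np.concatenate([[0.0], np.cumsum(e)])
    ratio = np.zeros(len(trace))
    for i in range(win, len(trace) - win):
        ratio[i] = (c[i + win] - c[i]) / (c[i] - c[i - win] + 1e-12)
    return int(np.argmax(ratio))

def coarse_to_fine_pick(trace, factor=4, win=10):
    # Stage 1 (coarse): pick on a decimated trace. Stage 2 (fine): re-pick
    # at full resolution inside a window around the coarse estimate.
    coarse = energy_ratio_pick(trace[::factor], win) * factor
    lo = max(0, coarse - 2 * factor * win)
    hi = min(len(trace), coarse + 2 * factor * win)
    return lo + energy_ratio_pick(trace[lo:hi], win)

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.01, 400)
trace[200:] += np.sin(0.3 * np.arange(200))   # first arrival at sample 200
print(coarse_to_fine_pick(trace))             # picks near sample 200
```

The coarse stage makes the fine stage robust: the full-resolution search never wanders into distant noise bursts, which mirrors the motivation for staging the picking process.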
Human video motion transfer (HVMT) aims to generate, given an image of a source person, a video that imitates the motion of a driving person. Existing HVMT methods mainly exploit generative adversarial networks (GANs) to perform warping operations based on the flow estimated from the source person image and each driving video frame. However, because of the large variations in pose and scale between the source and driving persons, these methods always produce obvious artifacts. To overcome these challenges, this paper proposes a novel GAN-based human motion transfer framework, REMOT. To generate realistic motion, REMOT adopts a progressive generation paradigm: it first generates each body part without flow-based warping, and then composites all the parts into a complete person performing the driving motion. Moreover, to preserve a natural global appearance, we design a global alignment module to align the source person with the scale and position of the driving person according to their layouts. Furthermore, we propose a texture alignment module that aligns each part of the person according to the similarity of its texture. Finally, through extensive quantitative and qualitative experiments, our REMOT achieves state-of-the-art results on two public benchmarks.
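The job of the global alignment module can be sketched with a bounding-box heuristic: estimate the scale and offset that map the source person's layout onto the driving person's. The function below is our hypothetical simplification for illustration; the paper's module is learned, not a box fit.

```python
import numpy as np

def align_layout(src_mask, drv_mask):
    """Estimate the scale and (row, col) offset mapping the source person's
    bounding box onto the driving person's, from binary layout masks."""
    def bbox(m):
        ys, xs = np.nonzero(m)
        return ys.min(), ys.max(), xs.min(), xs.max()
    sy0, sy1, sx0, sx1 = bbox(src_mask)
    dy0, dy1, dx0, dx1 = bbox(drv_mask)
    scale = (dy1 - dy0) / max(sy1 - sy0, 1)       # match person height
    offset = (dy0 - scale * sy0, dx0 - scale * sx0)
    return scale, offset

src = np.zeros((12, 12)); src[2:6, 1:4] = 1       # small source person
drv = np.zeros((16, 16)); drv[4:11, 2:7] = 1      # taller driving person
print(align_layout(src, drv))                      # scale 2.0, offset (0.0, 0.0)
```

A transform like this, applied before part-wise generation, is one way to see why aligning layouts first removes the scale and position mismatch that causes warping artifacts.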
Nonconvex relaxation methods have been widely used in tensor recovery problems and achieve better recovery results than convex relaxation methods. In this paper, a new nonconvex function, the Minimax Logarithmic Concave Penalty (MLCP) function, is proposed, and some of its intrinsic properties are analyzed; among them, it is interesting to find that the logarithmic function is an upper bound of the MLCP function. The proposed function is generalized to the tensor case, yielding the tensor MLCP and the weighted tensor $L_{\gamma}$-norm. Considering that their explicit solutions cannot be obtained when they are applied directly to tensor recovery problems, the corresponding equivalence theorems are given, namely the tensor equivalent MLCP theorem and the equivalent weighted tensor $L_{\gamma}$-norm theorem. In addition, we propose two EMLCP-based models for the classic tensor recovery problems, low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA), and design proximal alternating linearized minimization (PALM) algorithms to solve them. Furthermore, based on the Kurdyka-Łojasiewicz property, it is proved that the solution sequences of the proposed algorithms have finite length and converge globally to critical points. Finally, extensive experiments show that the proposed algorithms achieve good results and confirm that the MLCP function is indeed better than the logarithmic function in the minimization problem, which is consistent with the analysis of its theoretical properties.
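The bias-reduction argument for nonconvex penalties can be seen in one dimension: soft-thresholding, the proximal operator of the convex $l_1$ relaxation, shrinks every surviving entry by the full threshold, whereas a log-style reweighted shrinkage barely touches large entries. The second operator below illustrates the effect only; it is not the MLCP proximal operator.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam*|x|: every survivor is shrunk by lam (bias)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def log_threshold(x, lam, eps=0.1):
    """One-step reweighted (log-penalty style) shrinkage: the effective
    threshold lam/(|x|+eps) decays for large entries, so large singular
    values are barely shrunk -- the reduced-bias behavior that motivates
    nonconvex penalties such as MLCP (NOT the MLCP operator itself)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam / (np.abs(x) + eps), 0.0)

x = np.array([0.2, 1.0, 10.0])
print(soft_threshold(x, 0.5))  # large entry shrunk by the full 0.5
print(log_threshold(x, 0.5))   # large entry shrunk by only ~0.05
```

Applied to singular values, the convex rule biases every estimated singular value downward by the same amount, while the nonconvex rule kills small (noise) values yet leaves dominant ones nearly intact.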
Tensor recovery is an important problem in computer vision and machine learning. Such problems are usually solved using the convex relaxations of the tensor rank and the $l_{0}$ norm, i.e., the nuclear norm and the $l_{1}$ norm, respectively. However, convex approximations are known to yield biased estimators. To overcome this problem, corresponding nonconvex regularizers have been adopted and designed. Inspired by the recently developed matrix equivalent minimax-concave penalty (EMCP) theorem, this paper establishes the corresponding tensor equivalent minimax-concave penalty (TEMCP) theorem. The TEMCP is constructed as the nonconvex regularizer part and the equivalent weighted tensor $\gamma$ norm (EWTGN) as the low-rank part, both of which achieve weight adaptivity. Meanwhile, we propose two corresponding adaptive models for two classic tensor recovery problems, low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA), with optimization algorithms based on the alternating direction method of multipliers (ADMM). This novel iterative adaptive algorithm can produce more accurate tensor recovery results. For the tensor completion model, multispectral image (MSI), magnetic resonance imaging (MRI) and color video (CV) data are considered, while for the tensor robust principal component analysis model, hyperspectral image (HSI) denoising under Gaussian noise and salt-and-pepper noise is considered. The proposed algorithms outperform state-of-the-art methods, and their descent and convergence properties are confirmed by experiments.
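The ADMM backbone behind such models can be sketched on the matrix analogue of LRTC: nuclear-norm minimization subject to agreement with the observed entries, split into a singular-value-thresholding step and a data-consistency step. This is a plain, non-adaptive nuclear-norm sketch under our assumptions, not the paper's TEMCP/EWTGN algorithm.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def admm_complete(M, mask, tau=1.0, rho=1.0, n_iter=500):
    """ADMM for: min tau*||X||_*  s.t.  X = Z, Z[mask] = M[mask]."""
    X, Z, Y = (np.zeros_like(M) for _ in range(3))
    for _ in range(n_iter):
        X = svt(Z - Y / rho, tau / rho)   # low-rank proximal update
        Z = X + Y / rho
        Z[mask] = M[mask]                 # enforce observed entries
        Y = Y + rho * (X - Z)             # dual ascent
    return Z

# Rank-1 ground truth with a single hidden entry.
M = np.outer(np.arange(1.0, 6.0), np.ones(5))
mask = np.ones((5, 5), dtype=bool)
mask[0, 0] = False
Z = admm_complete(M, mask)
print(Z[0, 0])  # close to the true value 1.0
```

Replacing the `svt` shrinkage with an adaptive, nonconvex rule is exactly where regularizers such as TEMCP plug into this scheme, which is why the equivalence theorems that give closed-form proximal steps matter.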
Recently, template-based trackers have become the leading tracking algorithms, with promising performance in terms of both efficiency and accuracy. However, the correlation operation between the query features and the given template only exploits accurate target localization, leading to state estimation errors, especially when the target undergoes severe deformation. To address this issue, segmentation-based trackers have been proposed, using per-pixel matching to effectively improve the tracking performance on deformable objects. However, most existing trackers refer only to the target features in the initial frame and therefore lack the discriminative capacity to handle challenging factors such as similar distractors, background clutter, and appearance changes. To this end, we propose a dynamic compact memory embedding to enhance the discrimination of segmentation-based deformable visual tracking. Specifically, we initialize a memory embedding with the target features in the first frame. During tracking, the current target features that have high correlation with the existing memory are updated into the memory embedding online. To further improve the segmentation accuracy for deformable objects, we adopt a point-to-set matching strategy to measure the correlation between the pixel-wise query features and the whole template, so as to capture more detailed deformation information. Extensive evaluations on six challenging tracking benchmarks, including VOT2016, VOT2018, VOT2019, GOT-10K, TrackingNet and LaSOT, demonstrate the superiority of our method over recent leading trackers. In addition, our method outperforms excellent segmentation-based trackers on the DAVIS2017 benchmark.
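The correlation-gated online update can be sketched as: merge the current target feature into its most similar memory slot only when the cosine correlation is high enough, so distractor features never pollute the compact memory. The threshold, momentum, and function name below are our simplification, not the paper's exact rule.

```python
import numpy as np

def update_memory(memory, feature, sim_thresh=0.7, momentum=0.9):
    """Correlation-gated online memory update (illustrative sketch):
    merge `feature` into its nearest memory slot only when the cosine
    similarity exceeds `sim_thresh`; otherwise leave the memory untouched."""
    mem_n = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    f_n = feature / np.linalg.norm(feature)
    sims = mem_n @ f_n
    j = int(np.argmax(sims))
    if sims[j] >= sim_thresh:               # update only on high correlation
        memory[j] = momentum * memory[j] + (1 - momentum) * feature
    return memory

mem = np.array([[1.0, 0.0], [0.0, 1.0]])
update_memory(mem, np.array([0.9, 0.1]))    # close to slot 0 -> merged
update_memory(mem, np.array([-1.0, 0.0]))   # distractor -> rejected
print(mem)
```

The gate is what keeps the embedding both compact (no new slots) and discriminative (low-correlation features, such as distractors, are never averaged in).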
Automatic image processing algorithms can improve the quality, efficiency and consistency of classifying the morphology of heterogeneous carbonate rocks, and can seamlessly process large volumes of data and images. Geologists face difficulty in deciding on the best method for determining petrophysical properties from rock images, micro-computed tomography (uCT), or magnetic resonance imaging (MRI). Most successful work has been on homogeneous rocks, focusing on 2D images, with less attention to 3D, and requires numerical simulation. Currently, image analysis methods converge toward three approaches: image processing, artificial intelligence, and image processing combined with artificial intelligence. In this work, we propose two methods to determine the porosity of 3D uCT and MRI images: an image processing method, the Image Resolution Optimized Gaussian Algorithm (IROGA), and an advanced image recognition method enabled by Machine Learning Difference of Gaussians Random Forest (MLDGRF). We have built reference 3D micro-models and collected images to calibrate the IROGA and MLDGRF methods. To evaluate the predictive capability of these calibrated methods, we ran them on 3D uCT and MRI images of natural heterogeneous carbonate rock. We measured the porosity and lithology of the carbonate rock with three industry-standard methods, respectively, as reference values. Notably, IROGA and MLDGRF produced porosity results with accuracies of 96.2% and 97.1%, and of 91.7% and 94.4%, respectively, compared with the three experimental measurements. We used two methods, X-ray powder diffraction and grain density measurements, to measure the limestone and pyrite reference values. MLDGRF produced lithology (limestone and pyrite) volumes with 97.7% accuracy.
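The thresholding step that underlies image-based porosity can be illustrated with an isodata-style iterative threshold, far simpler than IROGA's resolution-optimized Gaussian or the MLDGRF random forest: split voxel intensities at the midpoint of the two class means, then report the pore-voxel fraction. This is a generic baseline under our assumptions, not either of the paper's methods.

```python
import numpy as np

def porosity_from_threshold(volume, pore_is_dark=True):
    # Isodata-style threshold: iterate t to the midpoint of the two class
    # means, then report the fraction of pore voxels.
    t = volume.mean()
    for _ in range(20):
        low, high = volume[volume <= t], volume[volume > t]
        if low.size == 0 or high.size == 0:
            break
        t = 0.5 * (low.mean() + high.mean())
    pores = volume <= t if pore_is_dark else volume > t
    return pores.mean()

# Toy volume: ~30% dark "pore" voxels near 0.2, ~70% bright grains near 0.8.
rng = np.random.default_rng(0)
shape = (16, 16, 16)
vol = np.where(rng.random(shape) < 0.3,
               rng.normal(0.2, 0.02, shape),
               rng.normal(0.8, 0.02, shape))
print(round(porosity_from_threshold(vol), 3))  # close to 0.3
```

On heterogeneous carbonates a single global threshold breaks down, which is precisely the gap that resolution optimization and learned classifiers are meant to close.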
Tensor sparse modeling is a promising approach that has achieved great success throughout science and engineering. As is well known, data in practical applications are often generated by multiple factors, so tensors are used to represent data while preserving the internal structure of these factors. However, unlike the matrix case, constructing a reasonable sparsity measure for tensors is a relatively difficult and very important task. Therefore, in this paper, we propose a new tensor sparsity measure called the tensor Full Feature Measure (FFM). It can simultaneously describe the feature information of each dimension of the tensor and the correlated features between any two dimensions, and it connects the Tucker rank with the tensor tube rank. This measure can describe the sparse features of a tensor more comprehensively. On this basis, we establish its nonconvex relaxation and apply FFM to low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA). FFM-based LRTC and TRPCA models are proposed, and two efficient alternating direction method of multipliers (ADMM) algorithms are developed to solve the proposed models. A variety of real numerical experiments substantiate the superiority of the proposed methods over state-of-the-art methods.
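The Tucker (multilinear) rank that FFM connects to the tube rank is directly computable from mode-n unfoldings; a minimal NumPy sketch (the helper names are ours):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker_rank(T):
    """The Tucker (multilinear) rank: the tuple of ranks of the
    mode-n unfoldings, one per tensor dimension."""
    return tuple(np.linalg.matrix_rank(unfold(T, m)) for m in range(T.ndim))

# A rank-(1, 1, 1) tensor: the outer product of three vectors.
a, b, c = np.arange(1, 4), np.arange(1, 5), np.arange(1, 6)
T = np.einsum('i,j,k->ijk', a, b, c)
print(tucker_rank(T))  # (1, 1, 1)
```

Each component of this tuple captures the low-rank structure along one dimension, which is exactly the per-dimension feature information a comprehensive sparsity measure must account for.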